Applications of the Conceptual Density Functional Theory Indices to Organic Chemistry Reactivity
Theoretical reactivity indices based on conceptual Density Functional
Theory (DFT) have become a powerful tool for the semiquantitative study
of organic reactivity. A large number of reactivity indices have been
proposed in the literature. Herein, global quantities such as the
electronic chemical potential μ, the electrophilicity ω, and the
nucleophilicity N indices, together with local condensed indices such as
the electrophilic and nucleophilic Parr functions, are discussed as the
most relevant indices for the study of organic reactivity.
http://www.mdpi.com/1420-3049/21/6/74
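The global indices named above have simple closed forms once the chemical
potential μ and the hardness η are estimated. A minimal sketch, assuming
frozen-orbital (Koopmans-type) estimates from HOMO/LUMO energies; the
orbital energies below are placeholders, not values from the paper:

```python
# Sketch: global conceptual-DFT reactivity indices from frozen-orbital
# (Koopmans-type) estimates. Orbital energies are hypothetical inputs.

def global_indices(e_homo: float, e_lumo: float) -> dict:
    """Return mu, eta, omega (all in eV) from orbital energies in eV."""
    mu = 0.5 * (e_homo + e_lumo)    # electronic chemical potential
    eta = e_lumo - e_homo           # chemical hardness
    omega = mu**2 / (2.0 * eta)     # Parr electrophilicity index
    return {"mu": mu, "eta": eta, "omega": omega}

# Hypothetical orbital energies for an electron-poor alkene:
idx = global_indices(e_homo=-7.5, e_lumo=-1.5)
print(idx)  # mu = -4.5 eV, eta = 6.0 eV, omega = 1.6875 eV
```

A larger ω signals a stronger electrophile; the nucleophilicity N index
is instead defined relative to a reference system and needs an extra
reference HOMO energy not shown here.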
Long-Lived Non-Equilibrium Interstitial-Solid-Solutions in Binary Mixtures
We perform particle-resolved experimental studies on the heterogeneous
crystallisation process of two-component mixtures of hard spheres. The
components have a size ratio of 0.39. We compare these with molecular
dynamics simulations of homogeneous nucleation. We find for both
experiments and simulations that the final assemblies are interstitial
solid solutions, where the large particles form crystalline close-packed
lattices, whereas the small particles occupy random interstitial sites.
This interstitial solution resembles that found at equilibrium when the
size ratios are 0.3 [Filion et al., Phys. Rev. Lett. 107, 168302 (2011)]
and 0.4 [Filion, PhD Thesis, Utrecht University (2011)]. However, unlike
these previous studies, for our system simulations showed that the small
particles are trapped in the octahedral holes of the ordered structure
formed by the large particles, leading to long-lived non-equilibrium
structures on the time scales studied, and not the equilibrium
interstitial solutions found earlier. Interestingly, the percentage of
small particles in the crystal formed by the large ones rapidly reaches
a maximum of around 14% for most of the packing fractions tested, unlike
previous predictions in which the occupancy of the interstitial sites
increases with the system concentration. Finally, no further hopping of
the small particles was observed.
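The trapping in octahedral holes is consistent with simple hard-sphere
geometry: an octahedral hole of a close-packed lattice of touching
spheres of radius R accepts a sphere of radius up to (sqrt(2) - 1) R,
about 0.414 R, comfortably above the 0.39 size ratio, while a
tetrahedral hole does not. A minimal sketch of that check:

```python
import math

# Sketch: geometric check that a size ratio of 0.39 lets small spheres
# sit in the octahedral holes of a close-packed lattice of touching
# large spheres, but not in the tetrahedral holes.

def max_hole_ratio(hole: str) -> float:
    """Largest radius ratio r/R that fits in the given fcc hole."""
    if hole == "octahedral":
        return math.sqrt(2.0) - 1.0    # ~0.414
    if hole == "tetrahedral":
        return math.sqrt(1.5) - 1.0    # ~0.225
    raise ValueError(hole)

size_ratio = 0.39
print(size_ratio <= max_hole_ratio("octahedral"))   # True: fits
print(size_ratio <= max_hole_ratio("tetrahedral"))  # False: too big
```

Since an fcc lattice has one octahedral hole per large sphere, a small-
to-large ratio of ~14% corresponds to roughly 14% of those holes being
occupied.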
The Inverse Amplitude Method and Adler Zeros
The Inverse Amplitude Method is a powerful unitarization technique to
enlarge the energy applicability region of Effective Lagrangians. It has
been widely used to describe resonances from Chiral Perturbation Theory
as well as for the Strongly Interacting Symmetry Breaking Sector. In
this work we show how it can be slightly modified to account also for
the sub-threshold region, incorporating correctly the Adler zeros
required by chiral symmetry and eliminating spurious poles. These
improvements produce negligible effects on the physical region.
Comment: 17 pages, 4 figures
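The elastic IAM underlying this work resums the LO and NLO chiral
partial waves as t = t2^2 / (t2 - t4). A minimal sketch with toy partial
waves (placeholder masses and couplings, not the paper's amplitudes),
showing that the construction satisfies exact elastic unitarity whenever
t4 carries the perturbative-unitarity imaginary part:

```python
import math

# Sketch of the elastic Inverse Amplitude Method (IAM): with LO (t2) and
# NLO (t4) partial waves, t_IAM = t2^2 / (t2 - t4) obeys exact elastic
# unitarity Im(1/t) = -sigma whenever Im t4 = sigma * t2^2.
# The toy t2 mimics an I=0 S-wave; the real part of t4 is arbitrary.

M_PI, F_PI = 139.57, 92.4  # pion mass, decay constant (MeV), scale only

def sigma(s):              # two-body phase-space factor above threshold
    return math.sqrt(1.0 - 4.0 * M_PI**2 / s)

def t2(s):                 # LO ChPT-like partial wave (toy)
    return (2.0 * s - M_PI**2) / (32.0 * math.pi * F_PI**2)

def t4(s, c=0.5):          # NLO: arbitrary real part, imaginary part
    return t2(s)**2 * (c + 1j * sigma(s))  # fixed by pert. unitarity

def t_iam(s):
    return t2(s)**2 / (t2(s) - t4(s))

s = 500.0**2               # test point above threshold
print(abs((1.0 / t_iam(s)).imag + sigma(s)) < 1e-12)  # True: unitary
```

The modification discussed in the paper adds a small correction so that
the denominator does not generate spurious poles near the Adler zero,
where t2 vanishes; the toy above omits it.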
Using think-aloud interviews to characterize model-based reasoning in electronics for a laboratory course assessment
Models of physical systems are used to explain and predict experimental
results and observations. The Modeling Framework for Experimental
Physics describes the process by which physicists revise their models to
account for newly acquired observations, or change their apparatus to
better represent their models, when they encounter discrepancies between
the actual and expected behavior of a system. While modeling is a
nationally recognized learning outcome for undergraduate physics lab
courses, no assessments of students' model-based reasoning exist for
upper-division labs. As part of a larger effort to create two
assessments of students' modeling abilities, we used the Modeling
Framework to develop and code think-aloud problem-solving activities
centered on investigating an inverting amplifier circuit. This study is
the second phase of a multiphase assessment instrument development
process. Here, we focus on characterizing the range of modeling pathways
students employ while interpreting the output signal of a circuit
functioning far outside its recommended operation range. We end by
discussing four outcomes of this work: (1) students engaged in all
modeling subtasks, and they spent the most time making measurements,
making comparisons, and enacting revisions; (2) each subtask occurred in
close temporal proximity to all other subtasks; (3) students sometimes
proposed causes that did not follow logically from observed
discrepancies; (4) similarly, students often relied on their
experiential knowledge and enacted revisions that did not follow
logically from articulated proposed causes.
Comment: 18 pages, 5 figures
Chiral extrapolation of light resonances from one and two-loop unitarized Chiral Perturbation Theory versus lattice results
We study the pion mass dependence of the rho(770) and f_0(600) masses
and widths from one- and two-loop unitarized Chiral Perturbation Theory.
We show the consistency of one-loop calculations with lattice results
for M_rho, f_pi and the isospin-2 scattering length a_20. Then, we
develop and apply the modified Inverse Amplitude Method formalism for
two-loop ChPT. In contrast to the f_0(600), the rho(770) is rather
sensitive to the two-loop ChPT parameters -- our main source of
systematic uncertainty. We thus provide two-loop unitarized fits
constrained by lattice information on M_rho and f_pi, by the leading
qqbar 1/N_c behavior of the rho, and by existing estimates of low-energy
constants. These fits yield relatively stable predictions up to
m_pi \simeq 300-350 MeV for the rho coupling and width as well as for
all the f_0(600) parameters. We confirm, to two loops, the weak m_pi
dependence of the rho coupling and the KSRF relation, and the existence
of two virtual f_0(600) poles for sufficiently high m_pi. At two loops
one of these poles becomes a bound state when m_pi is somewhat larger
than 300 MeV.
Comment: 15 pages, to appear in Phys. Rev.
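The KSRF relation mentioned above ties the rho coupling to M_rho and
f_pi, g^2 = M_rho^2 / (2 f_pi^2), and with it the tree-level P-wave
width. A minimal numerical sketch with standard PDG-scale inputs (not
the fit values of the paper):

```python
import math

# Sketch: KSRF estimate of the rho width. The coupling squared is
# g^2 = M_rho^2 / (2 f_pi^2), and the tree-level P-wave width is
# Gamma = g^2 p^3 / (6 pi M_rho^2), with p the pion momentum at the
# resonance. Inputs are standard physical values, not fit results.

M_RHO, M_PI, F_PI = 775.26, 139.57, 92.4   # MeV

g2 = M_RHO**2 / (2.0 * F_PI**2)            # KSRF coupling squared
p = math.sqrt(M_RHO**2 / 4.0 - M_PI**2)    # pion decay momentum
gamma = g2 * p**3 / (6.0 * math.pi * M_RHO**2)
print(round(gamma, 1))                     # ~147 MeV, near the observed width
```

That the estimate lands so close to the measured width is the reason the
relation serves as a useful benchmark for the chiral extrapolations
discussed in the abstract.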
Continuum variational and diffusion quantum Monte Carlo calculations
This topical review describes the methodology of continuum variational and
diffusion quantum Monte Carlo calculations. These stochastic methods are based
on many-body wave functions and are capable of achieving very high accuracy.
The algorithms are intrinsically parallel and well-suited to petascale
computers, and the computational cost scales as a polynomial of the number of
particles. A guide to the systems and topics which have been investigated using
these methods is given. The bulk of the article is devoted to an overview of
the basic quantum Monte Carlo methods, the forms and optimisation of wave
functions, performing calculations within periodic boundary conditions, using
pseudopotentials, excited-state calculations, sources of calculational
inaccuracy, and calculating energy differences and forces.
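Variational Monte Carlo, the first of the two methods reviewed, can be
illustrated in a few lines for the hydrogen atom, where the exact ground
state is known. A minimal sketch with a simple Metropolis walker and toy
parameters (step size, walker count, and seed are arbitrary choices):

```python
import math
import random

# Sketch: minimal variational Monte Carlo for the hydrogen atom with
# trial wave function psi(r) = exp(-alpha * r). Metropolis moves sample
# |psi|^2; the local energy is E_L = -alpha^2/2 + (alpha - 1)/r (hartree).
# At alpha = 1 the trial function is exact and E_L = -0.5 with zero
# variance; any other alpha gives a higher mean energy.

def vmc_energy(alpha, n_steps=20000, step=0.5, seed=1):
    rng = random.Random(seed)
    x = [0.5, 0.5, 0.5]                    # walker position (bohr)
    e_sum = 0.0
    for _ in range(n_steps):
        trial = [xi + step * (rng.random() - 0.5) for xi in x]
        r_old = math.sqrt(sum(xi * xi for xi in x))
        r_new = math.sqrt(sum(xi * xi for xi in trial))
        # Accept with probability |psi_new / psi_old|^2
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
            x, r_old = trial, r_new
        e_sum += -0.5 * alpha**2 + (alpha - 1.0) / r_old
    return e_sum / n_steps

print(vmc_energy(alpha=1.0))   # exactly -0.5: zero-variance limit
print(vmc_energy(alpha=0.8))   # higher mean energy (variational bound)
```

Diffusion Monte Carlo then removes most of the remaining trial-function
bias by projecting onto the ground state, at higher computational cost.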
Enhanced non-quark-antiquark and non-glueball Nc behavior of light scalar mesons
We show that the latest and very precise dispersive data analyses
require a large and very unnatural fine-tuning of the 1/Nc expansion at
Nc = 3 if the f_0(600) and K(800) light scalar mesons are to be
considered predominantly quark-antiquark states, which is not needed for
light vector mesons. For this, we use scattering observables whose 1/Nc
corrections are suppressed by more than one power of 1/Nc for
quark-antiquark or glueball states, thus enhancing contributions of
other nature. This is achieved without using unitarized ChPT, but if it
is used we can also show that it is not just that the coefficients of
the 1/Nc expansion are unnatural, but that the expansion itself does not
even follow the expected 1/Nc scaling of a glueball or a quark-antiquark
meson.
Comment: Discussion disfavoring a glueball interpretation added. Version
published in Phys. Rev.
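The 1/Nc diagnosis described above amounts to extracting the power n in
Gamma(Nc) ~ Nc^(-n) from the Nc dependence of a resonance width: n = 1
for a qqbar meson, n = 2 for a glueball. A minimal sketch with
synthetic, exact-scaling toy widths (not lattice or dispersive data):

```python
import math

# Sketch: classify the large-Nc nature of a resonance by fitting the
# exponent n in Gamma(Nc) ~ Nc^(-n). The toy widths below scale exactly
# and only illustrate the fit; real pole trajectories carry corrections.

def nc_exponent(ncs, widths):
    """Least-squares slope of log(Gamma) vs log(Nc), sign-flipped."""
    xs = [math.log(n) for n in ncs]
    ys = [math.log(w) for w in widths]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

ncs = [3, 6, 9, 12]
qqbar_like = [150.0 * 3.0 / n for n in ncs]          # Gamma ~ 1/Nc
glueball_like = [150.0 * (3.0 / n) ** 2 for n in ncs]  # Gamma ~ 1/Nc^2
print(round(nc_exponent(ncs, qqbar_like), 3))     # 1.0
print(round(nc_exponent(ncs, glueball_like), 3))  # 2.0
```

The paper's point is that the f_0(600) and K(800) widths fail to follow
either clean power law near Nc = 3 without fine-tuning.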